Singularity-free dynamical invariants-based quantum control
Sareen, Ritik, Youssry, Akram, Peruzzo, Alberto
State preparation is a cornerstone of quantum technologies, underpinning applications in computation, communication, and sensing. Its importance becomes even more pronounced in non-Markovian open quantum systems, where environmental memory and model uncertainties pose significant challenges to achieving high-fidelity control. Invariant-based inverse engineering provides a principled framework for synthesizing analytic control fields, yet existing parameterizations often lead to experimentally infeasible, singular pulses and are limited to simplified noise models such as those of Lindblad form. Here, we introduce a generalized invariant-based protocol for single-qubit state preparation under arbitrary noise conditions. The control proceeds in two stages: first, we construct a family of bounded pulses that achieve perfect state preparation in a closed system; second, we identify the optimal member of this family that minimizes the effect of noise. The framework accommodates both (i) characterized noise, enabling noise-aware control synthesis, and (ii) uncharacterized noise, where a noise-agnostic variant preserves robustness without requiring a master-equation description. Numerical simulations demonstrate high-fidelity state preparation across diverse targets while producing smooth, hardware-feasible control fields. This singularity-free framework extends invariant-based control to realistic open-system regimes, providing a versatile route toward robust quantum state engineering on NISQ hardware and other platforms exhibiting non-Markovian dynamics.
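The two-stage structure (a family of pulses that are all perfect in the closed system, then a selection step that minimizes noise sensitivity) can be illustrated with a minimal toy sketch. This is not the paper's dynamical-invariant construction: it uses an assumed sin²-shaped π-area Rabi pulse family indexed by duration, and quasi-static σ_z detuning as a stand-in noise ensemble.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def step(H, dt):
    # exact 2x2 propagator exp(-i H dt) for Hermitian H, via Pauli decomposition
    a0 = np.real(np.trace(H)) / 2
    A = H - a0 * I2
    w = np.sqrt(max(np.real(np.trace(A @ A)) / 2, 1e-30))
    return np.exp(-1j * a0 * dt) * (np.cos(w * dt) * I2 - 1j * np.sin(w * dt) * A / w)

def evolve(T, delta, n=400):
    # pi-area pulse Omega(t) = (2*pi/T) sin^2(pi t / T): every T gives perfect
    # |0> -> |1> transfer in the closed system (delta = 0)
    psi = np.array([1, 0], dtype=complex)
    dt = T / n
    for k in range(n):
        t = (k + 0.5) * dt
        omega = (2 * np.pi / T) * np.sin(np.pi * t / T) ** 2
        H = 0.5 * omega * SX + 0.5 * delta * SZ
        psi = step(H, dt) @ psi
    return psi

def infidelity(T, deltas):
    # stage 2 cost: average infidelity over a quasi-static detuning ensemble
    return np.mean([1 - abs(evolve(T, d)[1]) ** 2 for d in deltas])

deltas = np.linspace(-0.3, 0.3, 7)      # assumed noise ensemble
family = np.linspace(1.0, 6.0, 11)      # stage 1: candidate durations T
best_T = min(family, key=lambda T: infidelity(T, deltas))
```

Every member of the family is exact when noise is absent, so the selection in stage 2 trades off nothing in the ideal limit; it only picks the member least exposed to the noise, mirroring the abstract's closed-system/noise-minimization split.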
RTFF: Random-to-Target Fabric Flattening Policy using Dual-Arm Manipulator
Tang, Kai, Bhattacharya, Dipankar, Xu, Hang, Tokuda, Fuyuki, Tien, Norman C., Kosuge, Kazuhiro
Robotic fabric manipulation in garment production for sewing, cutting, and ironing requires reliable flattening and alignment, yet remains challenging due to fabric deformability, effectively infinite degrees of freedom, and frequent occlusions from wrinkles, folds, and the manipulator's End-Effector (EE) and arm. To address these issues, this paper proposes the first Random-to-Target Fabric Flattening (RTFF) policy, which aligns a random wrinkled fabric state to an arbitrary wrinkle-free target state. The proposed policy adopts a hybrid Imitation Learning-Visual Servoing (IL-VS) framework, where IL learns with explicit fabric models for coarse alignment of the wrinkled fabric toward a wrinkle-free state near the target, and VS ensures fine alignment to the target. Central to this framework is a template-based mesh that offers precise target state representation, wrinkle-aware geometry prediction, and consistent vertex correspondence across RTFF manipulation steps, enabling robust manipulation and seamless IL-VS switching. Leveraging the mesh representation, a novel IL solution for RTFF, Mesh Action Chunking Transformer (MACT), is then proposed by conditioning the mesh information into a Transformer-based policy. The RTFF policy is validated on a real dual-arm teleoperation system, showing zero-shot alignment to different targets, high accuracy, and strong generalization across fabrics and scales. Project website: https://kaitang98.github.io/RTFF_Policy/
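The hybrid IL-VS idea, coarse learned actions until the state is near the target and then a fine servoing law, can be sketched abstractly. Everything here is a hypothetical stand-in: the state is a flat feature vector rather than a mesh, `il_policy` substitutes for the learned MACT policy, and the switching threshold is an assumed tuning parameter.

```python
import numpy as np

def il_policy(state, target):
    # stand-in for the learned coarse (imitation-learned) policy:
    # a large corrective step toward the target configuration
    return 0.5 * (target - state)

def vs_policy(state, target, gain=0.8):
    # proportional visual-servoing law on the feature error
    return gain * (target - state)

def rtff_step(state, target, switch_threshold=0.1):
    # hybrid controller: IL while far from target, VS once close
    err = np.linalg.norm(target - state)
    action = il_policy(state, target) if err > switch_threshold else vs_policy(state, target)
    return state + action, err
```

In the paper the switching is enabled by the template mesh giving both phases the same vertex correspondence; in this sketch that shared representation is reduced to a common feature vector.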
Zero-Shot Transferable Solution Method for Parametric Optimal Control Problems
Li, Xingjian, Kan, Kelvin, Verma, Deepanshu, Kumar, Krishna, Osher, Stanley, Drgoňa, Ján
This paper presents a transferable solution method for optimal control problems with varying objectives using function encoder (FE) policies. Traditional optimization-based approaches must be re-solved whenever objectives change, resulting in prohibitive computational costs for applications requiring frequent evaluation and adaptation. The proposed method learns a reusable set of neural basis functions that spans the control policy space, enabling efficient zero-shot adaptation to new tasks through either projection from data or direct mapping from problem specifications. The key idea is an offline-online decomposition: basis functions are learned once during offline imitation learning, while online adaptation requires only lightweight coefficient estimation. Numerical experiments across diverse dynamics, dimensions, and cost structures show our method delivers near-optimal performance with minimal overhead when generalizing across tasks, enabling semi-global feedback policies suitable for real-time deployment.
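The offline-online decomposition described above can be made concrete with a small sketch: a fixed set of basis functions stands in for the trained neural bases, and online adaptation is a least-squares projection of a few demonstrations onto that basis. The tanh random features are an illustrative assumption, not the paper's learned function encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Offline" phase: k fixed basis policies phi_i(x) spanning the policy space.
# Tanh random features serve as a stand-in for bases learned by imitation.
k, d = 8, 2
W = rng.normal(size=(k, d))
b = rng.normal(size=k)

def basis(x):
    # phi(x) in R^k
    return np.tanh(W @ x + b)

# "Online" phase: zero-shot adaptation to a new task is just coefficient
# estimation from a handful of (state, optimal-action) pairs.
def adapt(states, actions):
    Phi = np.stack([basis(x) for x in states])        # (n, k) design matrix
    coeffs, *_ = np.linalg.lstsq(Phi, actions, rcond=None)
    return coeffs                                      # (k,) for a scalar action

def policy(x, coeffs):
    # evaluated policy: a linear combination of the fixed bases
    return basis(x) @ coeffs
```

The cost asymmetry is the point: the expensive part (learning the bases) happens once, while each new task costs only one small linear solve, which is what makes real-time adaptation plausible.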